Sports are an exciting domain for deploying machine learning models, given their global popularity and the wealth of prediction tasks they offer. However, data from conventional sports is often unsuitable for research use due to issues of size, accuracy, and accessibility. To address these concerns, we turn to esports, a growing domain encompassing video games played in a manner similar to conventional sports. Because esports data is acquired through server logs rather than peripheral sensors, esports provides a unique opportunity to obtain large volumes of clean and detailed spatiotemporal data, similar to what is collected in conventional sports. To parse esports data, we develop awpy, an open-source esports game-log parsing library that extracts player trajectories and actions from game logs. Using awpy, we parse 8.6M actions, 7.9M game frames, and 417K trajectories from 1,558 game logs of professional Counter-Strike matches to create the Esports Trajectories and Actions (ESTA) dataset. ESTA is one of the largest and most granular publicly available sports datasets to date. We use ESTA to develop benchmarks for win prediction using player-specific information. The ESTA data is available at https://github.com/pnxenopoulos/esta and awpy is publicly available through PyPI.
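As a rough illustration of how such a log parser is used in practice, the sketch below loads a demo file with awpy. It assumes the awpy 1.x DemoParser interface and the output keys shown in the comments, which may differ across library versions.

```python
# Minimal sketch of parsing a Counter-Strike demo with awpy.
# Assumes the awpy 1.x DemoParser interface; the output keys used here
# ("gameRounds", "kills", "attackerName", "victimName") are assumptions
# that may differ across library versions.
from awpy.parser import DemoParser

parser = DemoParser(demofile="match.dem", parse_rate=128)
data = parser.parse()  # returns a dict of parsed game data

# Actions (e.g., kills) are grouped under each round of the match.
for game_round in data["gameRounds"]:
    for kill in game_round.get("kills") or []:
        print(kill["attackerName"], "->", kill["victimName"])
```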
Predicting sports outcomes matters to teams, leagues, bettors, media, and fans. Given the growing volume of player tracking data, sports analytics models increasingly rely on spatially derived features built from that tracking data. However, because common modeling techniques depend on vector inputs, player-specific information cannot easily be included as features in its own right. Instead, spatially derived features are constructed relative to anchor objects, for example through global feature aggregation or through role-assignment schemes in which players are designated a unique role in the game. In doing so, we sacrifice inter-player and local relationships in favor of global ones. To address this, we introduce a sport-agnostic, graph-based representation of game states. We then use the proposed graph representation as input to graph neural networks to predict sports outcomes. Our approach preserves permutation invariance and allows for flexible player interaction weights. We demonstrate how our approach provides statistically significant improvements over the state of the art on prediction tasks in American football and esports, reducing test set loss by 9% and 20%, respectively. Additionally, we show how our model can be used to answer "what if" questions in sports and to visualize relationships between players.
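To make the representation concrete, here is a minimal sketch of encoding a single game state as a fully connected player graph and scoring it with a small graph neural network in PyTorch Geometric. The node features, the fully connected edge structure, and the architecture are illustrative assumptions, not the paper's exact model.

```python
# Illustrative sketch: encode one game state as a fully connected graph
# of players and score it with a small GNN. The node features (e.g.,
# position, health) and the two-layer GCN are assumptions for
# illustration, not the paper's exact construction.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool

def game_state_to_graph(player_features: torch.Tensor) -> Data:
    n = player_features.size(0)
    # Directed edges between every pair of distinct players.
    src, dst = zip(*[(i, j) for i in range(n) for j in range(n) if i != j])
    edge_index = torch.tensor([src, dst], dtype=torch.long)
    return Data(x=player_features, edge_index=edge_index)

class WinProbNet(torch.nn.Module):
    def __init__(self, in_dim: int, hidden: int = 32):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 1)

    def forward(self, data: Data) -> torch.Tensor:
        h = self.conv1(data.x, data.edge_index).relu()
        h = self.conv2(h, data.edge_index).relu()
        batch = torch.zeros(h.size(0), dtype=torch.long)  # one graph
        pooled = global_mean_pool(h, batch)  # permutation-invariant readout
        return torch.sigmoid(self.head(pooled))

# Ten players, five features each (e.g., x, y, z, health, armor).
graph = game_state_to_graph(torch.randn(10, 5))
win_prob = WinProbNet(in_dim=5)(graph)
```

The mean-pooling readout is what makes the prediction invariant to player ordering, the property the abstract highlights.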
Analyzing classification model performance is a crucial task for machine learning practitioners. While practitioners often rely on count-based metrics derived from the confusion matrix, such as accuracy, many applications, such as weather forecasting, sports betting, or patient risk prediction, depend on a classifier's predicted probabilities rather than its predicted labels. In these settings, practitioners care about producing a calibrated model, that is, one whose outputs reflect the true distribution. Model calibration is typically analyzed visually through static reliability diagrams; however, traditional calibration visualizations can suffer from a variety of drawbacks because of the strong aggregation they require, and count-based approaches cannot adequately analyze model calibration. We present Calibrate, an interactive reliability diagram that addresses these issues. Calibrate constructs a reliability diagram that is resistant to the drawbacks of traditional approaches and that supports interactive subgroup analysis and instance-level inspection. We demonstrate the utility of Calibrate through use cases on both real-world and synthetic data, and we further validate Calibrate with the results of a think-aloud experiment conducted with data scientists who regularly analyze model calibration.
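For context, the sketch below produces the static reliability diagram that Calibrate improves upon, using scikit-learn's calibration_curve on synthetic predictions. It is a baseline illustration, not the Calibrate tool itself.

```python
# A static reliability diagram, the conventional baseline discussed
# above. Uses scikit-learn's calibration_curve; the synthetic,
# deliberately miscalibrated predictions are illustrative only.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
y_prob = rng.uniform(size=5000)  # model's predicted probabilities
y_true = (rng.uniform(size=5000) < y_prob**1.3).astype(int)  # miscalibrated

# Bin predictions and compare observed frequency to mean prediction.
frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=10)

plt.plot([0, 1], [0, 1], "k--", label="perfect calibration")
plt.plot(mean_pred, frac_pos, marker="o", label="model")
plt.xlabel("Mean predicted probability")
plt.ylabel("Observed fraction of positives")
plt.legend()
plt.show()
```

The strong aggregation the abstract criticizes is visible here: each plotted point collapses hundreds of instances into one bin, hiding subgroup- and instance-level behavior.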
Local explainability methods, which seek to produce an explanation for each model prediction, are increasingly prevalent because practitioners need to rationalize their model outputs. However, comparing local explainability methods is difficult, since each produces outputs in varying scales and dimensions. Moreover, due to the stochastic nature of some explainability methods, different runs of a method may yield contradictory explanations for a given observation. In this paper, we propose a topology-based framework for extracting a simplified representation from a set of local explanations. We do so by first modeling the relationship between the explanation space and the model predictions as a scalar function. We then compute the topological skeleton of this function. The topological skeleton serves as a signature for such functions, which we use to compare different explanation methods. We demonstrate that our framework can not only reliably identify differences between explainability techniques but also provide stable representations. We then show how our framework can be used to identify appropriate parameters for local explainability methods. Our framework is simple, requires no complex optimization, and can be broadly applied to most local explanation methods. We believe the practicality and versatility of our approach will help promote topology-based approaches as a tool for understanding and comparing explanation methods.
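As a simplified illustration of the pipeline, the sketch below treats model predictions as a scalar function over a k-nearest-neighbor graph of explanation vectors and extracts a 0-dimensional persistence summary (a merge-tree-style skeleton) with a union-find sweep. This is a reduced sketch of the general idea, not the paper's exact construction.

```python
# Simplified sketch: the prediction values define a scalar function on a
# kNN graph of explanation vectors; sweeping vertices by increasing value
# and merging components (elder rule) yields 0-dimensional persistence
# pairs, a merge-tree-style signature of the function.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def zero_dim_persistence(explanations, preds, k=5):
    n = len(preds)
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(explanations) \
        .kneighbors(explanations)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in idx[i, 1:]:  # skip self-neighbor in column 0
            adj[i].add(int(j))
            adj[int(j)].add(i)

    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    order = np.argsort(preds)  # sweep by increasing prediction value
    rank = np.empty(n, dtype=int)
    rank[order] = np.arange(n)
    birth = np.asarray(preds, dtype=float).copy()
    pairs = []
    for v in order:
        for u in adj[v]:
            if rank[u] > rank[v]:
                continue  # neighbor not yet swept in
            ru, rv = find(u), find(v)
            if ru != rv:  # elder rule: younger component dies here
                young, old = (ru, rv) if birth[ru] > birth[rv] else (rv, ru)
                pairs.append((float(birth[young]), float(preds[v])))
                parent[young] = old
    return pairs  # the topological signature
```

Two explanation methods can then be compared by comparing their persistence pairs, for instance with a bottleneck distance.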
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses these question-answer pairs as training data for a language model underlying a state-of-the-art QA system based on BERT. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed from subjects (or objects) and predicates, while the objects (or subjects) are treated as answers. Experiments on five extractive QA datasets demonstrate that our technique achieves on-par performance with existing state-of-the-art QA systems, with the benefit of being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
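A toy sketch of the triple-to-question step follows, under the simplifying assumption of naive wh-templates; PIE-QG's actual question-formation rules are richer.

```python
# Toy sketch of the triple-to-question step: given an OpenIE triple
# <subject, predicate, object>, form one question from the subject and
# predicate (answer: object) and one from the object and predicate
# (answer: subject). The wh-templates are a naive simplification, not
# PIE-QG's actual rules.
from typing import List, Tuple

def qa_pairs_from_triple(subj: str, pred: str, obj: str) -> List[Tuple[str, str]]:
    return [
        (f"What {pred} {obj}?", subj),  # object + predicate, ask for subject
        (f"{subj} {pred} what?", obj),  # subject + predicate, ask for object
    ]

for question, answer in qa_pairs_from_triple("Marie Curie", "discovered", "polonium"):
    print(question, "->", answer)
# What discovered polonium? -> Marie Curie
# Marie Curie discovered what? -> polonium
```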
This paper presents a machine learning approach to multidimensional item response theory (MIRT), a class of latent factor models that can be used to model and predict student performance from observed assessment data. Inspired by collaborative filtering, we define a general class of models that includes many MIRT models. We discuss the use of penalized joint maximum likelihood (JML) to estimate individual models and cross-validation to select the best performing model. This model evaluation process can be optimized using batching techniques, such that even sparse large-scale data can be analyzed efficiently. We illustrate our approach with simulated and real data, including an example from a massive open online course (MOOC). The high-dimensional model fit to this large and sparse dataset does not lend itself well to traditional methods of factor interpretation. By analogy to recommender-system applications, we propose an alternative "validation" of the factor model, using auxiliary information about the popularity of items consulted during an open-book exam in the course.
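As a minimal sketch of penalized JML under this model class, the code below fits a multidimensional 2PL-style model, P(correct) = sigmoid(theta_i . a_j + b_j), by full-batch gradient descent with an L2 penalty on dense data. The batching needed for sparse large-scale data, and the cross-validation over the number of factors, are omitted for brevity.

```python
# Hedged sketch of penalized joint maximum likelihood (JML) for a
# multidimensional 2PL-style IRT model. Full-batch gradient descent on
# a dense response matrix Y (students x items, entries in {0, 1});
# hyperparameters are illustrative.
import numpy as np

def fit_mirt_jml(Y, n_factors=2, lam=0.1, lr=0.05, n_iter=2000, seed=0):
    n_students, n_items = Y.shape
    rng = np.random.default_rng(seed)
    theta = rng.normal(scale=0.1, size=(n_students, n_factors))  # abilities
    a = rng.normal(scale=0.1, size=(n_items, n_factors))         # loadings
    b = np.zeros(n_items)                                        # easiness
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta @ a.T + b)))  # predicted probs
        resid = p - Y  # gradient of the NLL w.r.t. the logits
        theta -= lr * (resid @ a / n_items + lam * theta)  # L2-penalized
        a -= lr * (resid.T @ theta / n_students + lam * a)
        b -= lr * resid.mean(axis=0)
    return theta, a, b

# Simulate 200 students x 30 items from a 2-factor model, then refit.
rng = np.random.default_rng(1)
true_theta = rng.normal(size=(200, 2))
true_a = rng.normal(size=(30, 2))
probs = 1 / (1 + np.exp(-true_theta @ true_a.T))
Y = (rng.uniform(size=probs.shape) < probs).astype(float)
theta_hat, a_hat, b_hat = fit_mirt_jml(Y)
```

In the workflow the abstract describes, n_factors would be chosen by cross-validated predictive performance rather than fixed in advance.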
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, in addition to lidar point clouds and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
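Purely for exposition, a record type for the motion-forecasting inputs described above might look like the following. The class and field names are assumptions chosen for readability, not the API of Argoverse's own tooling.

```python
# Illustrative record type for the motion-forecasting inputs described
# above (location, heading, velocity, and category per actor). All names
# here are expository assumptions, not the av2 package's API.
from dataclasses import dataclass
from typing import List

@dataclass
class TrackState:
    x_m: float          # map-frame position, meters
    y_m: float
    heading_rad: float  # yaw in the map frame
    vx_mps: float       # velocity components, meters/second
    vy_mps: float

@dataclass
class TrackHistory:
    track_id: str
    category: str             # e.g., "vehicle", "pedestrian", "cyclist"
    is_scored: bool           # whether models are evaluated on this actor
    states: List[TrackState]  # observed history, oldest first

history = TrackHistory(
    track_id="actor-042",
    category="vehicle",
    is_scored=True,
    states=[TrackState(10.0, 5.0, 0.1, 3.2, 0.0)],
)
```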
The celebrated FedAvg algorithm of McMahan et al. (2017) is based on three components: client sampling (CS), data sampling (DS) and local training (LT). While the first two are reasonably well understood, the third component, whose role is to reduce the number of communication rounds needed to train the model, resisted all attempts at a satisfactory theoretical explanation. Malinovsky et al. (2022) identified four distinct generations of LT methods based on the quality of the provided theoretical communication complexity guarantees. Despite a lot of progress in this area, none of the existing works were able to show that it is theoretically better to employ multiple local gradient-type steps (i.e., to engage in LT) than to rely on a single local gradient-type step only in the important heterogeneous data regime. In a recent breakthrough embodied in their ProxSkip method and its theoretical analysis, Mishchenko et al. (2022) showed that LT indeed leads to provable communication acceleration for arbitrarily heterogeneous data, thus jump-starting the $5^{\rm th}$ generation of LT methods. However, while these latest generation LT methods are compatible with DS, none of them support CS. We resolve this open problem in the affirmative. In order to do so, we had to base our algorithmic development on new algorithmic and theoretical foundations.
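A minimal sketch of a FedAvg-style loop that makes the three components explicit, on synthetic least-squares clients, is given below. It illustrates CS, DS, and LT only; it is not ProxSkip or any fifth-generation method.

```python
# Minimal FedAvg-style sketch making the three components explicit:
# client sampling (CS), data sampling (DS), and local training (LT).
# Synthetic least-squares clients; step sizes and counts are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 20, 5
clients = [(rng.normal(size=(50, dim)),  # features A_c
            rng.normal(size=50))         # targets y_c
           for _ in range(n_clients)]

x = np.zeros(dim)  # global model
for _ in range(100):
    sampled = rng.choice(n_clients, size=5, replace=False)  # CS
    updates = []
    for c in sampled:
        A, y = clients[c]
        x_local = x.copy()
        for _ in range(10):  # LT: multiple local gradient-type steps
            batch = rng.choice(len(y), size=16, replace=False)  # DS
            grad = A[batch].T @ (A[batch] @ x_local - y[batch]) / len(batch)
            x_local -= 0.01 * grad
        updates.append(x_local)
    x = np.mean(updates, axis=0)  # server averages the sampled clients
```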
Graph clustering is a fundamental problem in unsupervised learning, with numerous applications in computer science and in analysing real-world data. In many real-world applications, we find that the clusters have a significant high-level structure. This is often overlooked in the design and analysis of graph clustering algorithms, which make strong simplifying assumptions about the structure of the graph. This thesis addresses the natural question of whether the structure of clusters can be learned efficiently and describes four new algorithmic results for learning such structure in graphs and hypergraphs. All of the presented theoretical results are extensively evaluated on both synthetic and real-world datasets of different domains, including image classification and segmentation, migration networks, co-authorship networks, and natural language processing. These experimental results demonstrate that the newly developed algorithms are practical, effective, and immediately applicable for learning the structure of clusters in real-world data.
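A small illustration of the phenomenon studied here: after clustering a graph, the clusters themselves can carry high-level structure, which the sketch below exposes as a meta-graph of inter-cluster edge weights. It uses off-the-shelf spectral clustering on a synthetic ring of clusters, not the thesis's algorithms.

```python
# Sketch: high-level structure among clusters, shown as a "meta-graph"
# of inter-cluster edge weights. Off-the-shelf spectral clustering on a
# synthetic stochastic block model; not the thesis's algorithms.
import numpy as np
import networkx as nx
from sklearn.cluster import SpectralClustering

# A ring of four dense clusters: cluster i connects mainly to i +/- 1 mod 4.
sizes = [30, 30, 30, 30]
p = [[0.5 if i == j else (0.05 if (j - i) % 4 in (1, 3) else 0.0)
      for j in range(4)] for i in range(4)]
G = nx.stochastic_block_model(sizes, p, seed=0)

A = nx.to_numpy_array(G)
labels = SpectralClustering(n_clusters=4, affinity="precomputed",
                            random_state=0).fit_predict(A)

# Meta-graph: total edge weight between each pair of clusters.
meta = np.zeros((4, 4))
for u, v in G.edges():
    meta[labels[u], labels[v]] += 1
    meta[labels[v], labels[u]] += 1
print(meta.astype(int))  # each cluster links heavily to exactly two others: a ring
```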
Selecting the number of topics in LDA models is considered a difficult task, for which alternative approaches have been proposed. The performance of the recently developed singular Bayesian information criterion (sBIC) is evaluated and compared to the performance of alternative model selection criteria. The sBIC is a generalization of the standard BIC that can be applied to singular statistical models. The comparison is based on Monte Carlo simulations and carried out for several alternative settings, varying with respect to the number of topics, the number of documents, and the size of documents in the corpora. Performance is measured using different criteria that take into account not only the correct number of topics but also whether the relevant topics from the data-generating processes (DGPs) are identified. Practical recommendations for LDA model selection in applications are derived.
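Since the sBIC is not available off the shelf in common libraries, the sketch below shows one standard alternative approach to the same selection problem: fitting LDA over a grid of topic counts and comparing held-out perplexity with scikit-learn. It is a baseline illustration, not an sBIC implementation.

```python
# Baseline illustration of topic-number selection: fit LDA for several
# topic counts and compare held-out perplexity (lower is better). Uses
# scikit-learn; the corpus and grid are illustrative, and fetching the
# 20 Newsgroups data requires a one-time download.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.model_selection import train_test_split

docs = fetch_20newsgroups(remove=("headers", "footers", "quotes")).data[:2000]
X = CountVectorizer(max_features=5000, stop_words="english").fit_transform(docs)
X_train, X_test = train_test_split(X, test_size=0.2, random_state=0)

for k in (5, 10, 20, 40):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X_train)
    print(k, lda.perplexity(X_test))
```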